library(readxl)
library(corrplot)
library(lm.beta)
library(Hmisc)
library(qgraph)
library(plyr)
library(dplyr)
library(psych)
library(neurobase)
library(oro.nifti)
library(png)
library(ggiraphExtra)
library(data.table)
library(cowplot)
library(ggplot2)
library(patchwork)
library(ggbeeswarm)
library(ggpubr)
library(lubridate)
library(reticulate)
library(svMisc)
library(NetworkToolbox)
library(writexl)
library(lme4)
library(lmerTest)
library(stringr)
library(boot)
library(table1)
dir.root = "/Volumes/NO NAME/fMRI_Day2_Graph_Theory/"
dir.fc.200 = paste0(dir.root, "/Schaefer200_functional_connectivity/")
dir.data.sheets = paste0(dir.root, "/data_sheets/")
Step 1: load and prepare the data
The data frame contains data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database.
ADNI is a large multi-center study that focuses on multi-modal neuroimaging in Alzheimer’s disease patients; more information can be found here: http://adni.loni.usc.edu
Included imaging modalities in the current dataset are tau-PET, amyloid-PET and resting-state fMRI. We will subset the data to healthy controls (i.e. cognitively normal [CN] individuals without evidence for amyloid pathology [Ab.neg]) and subjects across the Alzheimer’s disease (AD) spectrum, defined as having abnormally elevated amyloid-beta (Ab) based on amyloid-PET. The AD spectrum includes individuals who are still cognitively normal (CN.Ab.pos, i.e. preclinical AD), subjects who show mild cognitive impairment (MCI.Ab.pos, i.e. prodromal AD), and patients with AD dementia (Dementia.Ab.pos).
For this course we will work primarily with cross-sectional data, stored in the “ADNI.data.bl” data frame. However, some individuals also have longitudinal tau-PET; you could have a look at those data too if there is time left.
# read in ADNI data
ADNI.data <- readxl::read_xlsx(paste0(dir.data.sheets, "/ADNI_fMRI_PET.xlsx"))
ADNI.data <- subset(ADNI.data, ID %nin% c("sub-4431", "sub-2133", "sub-6559"))
# merge diagnosis and amyloid status
ADNI.data$DX.Ab <- paste0(ADNI.data$DX, ".", ADNI.data$amyloid.global.SUVR.status)
ADNI.data$DX.Ab <- factor(ADNI.data$DX.Ab,
levels = c("CN.Ab.neg",
"CN.Ab.pos",
"MCI.Ab.pos",
"Dementia.Ab.pos"))
# subset to controls and subjects across the AD spectrum (CN = cognitively normal, MCI = Mild Cognitive Impairment, Dementia = Alzheimer's disease dementia)
ADNI.data <- subset(ADNI.data, DX.Ab %in% c("CN.Ab.neg", "CN.Ab.pos", "MCI.Ab.pos", "Dementia.Ab.pos"))
# locate functional connectivity matrices
ADNI.data$FC.mat.200 <- str_replace(ADNI.data$FC.mat.200, "/Network/Cluster/ADNI/functional_connectivity/Schaefer_200/",
paste0(dir.root, "/Schaefer200_functional_connectivity/"))
# select baseline data
ADNI.data.bl = subset(ADNI.data, tau.pet.fu.yrs == 0)
First, you should get an idea of the type of data we’re working with. So we will check some basic distributions of biomarker and cognitive data for the current patient cohort.
We can first create an overview table of demographics, as well as amyloid-PET and tau-PET load, to
see how biomarkers are distributed across diagnostic groups.
We’ll look at the following variables (variable names on the left, explanations on the right):
age = age
sex = sex
centiloid = global amyloid-PET level (a typical “cut-off” for abnormality is 20)
tau.global.SUVR = global tau-PET level (a typical “cut-off” for abnormality is 1.3)
MMSE = Mini Mental State Exam, i.e. a global cognitive screening instrument, where lower values indicate stronger cognitive impairment
ADNI_MEM = A memory composite score across several memory tests. Lower values indicate stronger impairment
table1(~ age + factor(sex) + centiloid + tau.global.SUVR + MMSE + ADNI_MEM | DX.Ab, data = ADNI.data.bl)
|  | CN.Ab.neg (N=192) | CN.Ab.pos (N=67) | MCI.Ab.pos (N=37) | Dementia.Ab.pos (N=19) | Overall (N=315) |
|---|---|---|---|---|---|
| age | |||||
| Mean (SD) | 72.2 (6.96) | 74.8 (5.96) | 75.9 (6.75) | 80.8 (8.09) | 73.7 (7.15) |
| Median [Min, Max] | 71.0 [56.0, 92.0] | 75.0 [65.0, 90.0] | 76.0 [61.0, 90.0] | 83.0 [62.0, 94.0] | 73.0 [56.0, 94.0] |
| factor(sex) | |||||
| F | 118 (61.5%) | 41 (61.2%) | 19 (51.4%) | 8 (42.1%) | 186 (59.0%) |
| M | 74 (38.5%) | 26 (38.8%) | 18 (48.6%) | 11 (57.9%) | 129 (41.0%) |
| centiloid | |||||
| Mean (SD) | -5.58 (12.8) | 65.6 (34.7) | 76.4 (31.7) | 93.0 (40.7) | 25.1 (45.6) |
| Median [Min, Max] | -7.12 [-34.3, 21.9] | 58.3 [23.2, 141] | 74.6 [22.8, 142] | 90.6 [44.2, 198] | 6.74 [-34.3, 198] |
| tau.global.SUVR | |||||
| Mean (SD) | 1.06 (0.0741) | 1.10 (0.0999) | 1.21 (0.222) | 1.36 (0.404) | 1.11 (0.164) |
| Median [Min, Max] | 1.06 [0.865, 1.45] | 1.10 [0.904, 1.62] | 1.15 [0.908, 1.83] | 1.23 [0.971, 2.37] | 1.08 [0.865, 2.37] |
| MMSE | |||||
| Mean (SD) | 29.3 (0.856) | 28.7 (1.74) | 26.7 (2.45) | 23.0 (4.73) | 27.7 (3.13) |
| Median [Min, Max] | 30.0 [27.0, 30.0] | 29.5 [24.0, 30.0] | 27.0 [22.0, 30.0] | 23.5 [14.0, 29.0] | 29.0 [14.0, 30.0] |
| Missing | 156 (81.3%) | 49 (73.1%) | 16 (43.2%) | 7 (36.8%) | 228 (72.4%) |
| ADNI_MEM | |||||
| Mean (SD) | 1.05 (0.566) | 0.924 (0.527) | 0.143 (0.584) | -0.685 (0.711) | 0.812 (0.740) |
| Median [Min, Max] | 0.991 [-0.229, 3.14] | 0.874 [-0.119, 1.94] | 0.0840 [-0.964, 1.69] | -0.509 [-2.39, 0.603] | 0.836 [-2.39, 3.14] |
| Missing | 1 (0.5%) | 0 (0%) | 0 (0%) | 0 (0%) | 1 (0.3%) |
As you can see, amyloid and tau levels increase (i.e. become more abnormal) across more severe
disease levels. Also, MMSE and ADNI_MEM values decrease, indicating stronger cognitive impairment.
Next, we should look at biomarker distributions of amyloid and tau pathology,
to get an idea of how amyloid and tau relate to increasing disease severity.
We’ll use ggplot2 for plotting. Some useful ggplot2 tutorials can be found here:
https://ggplot2.tidyverse.org
http://r-statistics.co/Complete-Ggplot2-Tutorial-Part1-With-R-Code.html
http://r-statistics.co/ggplot2-Tutorial-With-R.html
Let’s first create a boxplot/beeswarm plot to illustrate the amyloid distribution across different diagnostic groups. The variables we’ll look at are centiloid (i.e. amyloid-PET) and DX.Ab (i.e. diagnostic groups)
# plot group differences
ggplot(data = ADNI.data.bl,
aes(
x = DX.Ab,
y = centiloid)) +
geom_boxplot(alpha = 0.1, notch = T, width = 0.5) +
geom_beeswarm(alpha = 0.4) +
theme_minimal() +
geom_hline(yintercept = 20, linetype = 2)
As you can see, amyloid levels increase across the AD spectrum. The dashed line indicates where amyloid becomes abnormal.
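To put a number on the dashed cut-off line, a quick sketch (assuming the ADNI.data.bl data frame as prepared above) tabulates the share of amyloid-positive individuals per diagnostic group:

```r
# share of subjects above the centiloid cut-off of 20, per diagnostic group
# (sketch; assumes ADNI.data.bl as prepared above)
tapply(ADNI.data.bl$centiloid > 20, ADNI.data.bl$DX.Ab, mean, na.rm = TRUE)
```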
To statistically test these group differences, we can run an ANCOVA. It is standard in clinical research to control for common covariates such as age and sex (and often also education).
So, let’s quantify the group differences using an ANCOVA with a post-hoc Tukey test. The ANCOVA tests whether there is an overall difference between diagnostic groups; the post-hoc Tukey test assesses the differences between individual diagnostic groups.
# test group differences (ANCOVA), controlling for age and sex
## run ANCOVA
tmp.aov <- aov(data = ADNI.data.bl,
centiloid ~ DX.Ab + age + sex);
## summarize output
summary(tmp.aov)
## Df Sum Sq Mean Sq F value Pr(>F)
## DX.Ab 3 475451 158484 280.349 <2e-16 ***
## age 1 1435 1435 2.538 0.112
## sex 1 1078 1078 1.908 0.168
## Residuals 309 174681 565
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## run post-hoc Tukey test to determine group differences
TukeyHSD(tmp.aov, which = "DX.Ab")
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = centiloid ~ DX.Ab + age + sex, data = ADNI.data.bl)
##
## $DX.Ab
## diff lwr upr p adj
## CN.Ab.pos-CN.Ab.neg 71.16379 62.4493273 79.87825 0.0000000
## MCI.Ab.pos-CN.Ab.neg 81.99277 70.9660878 93.01945 0.0000000
## Dementia.Ab.pos-CN.Ab.neg 98.54186 83.7714568 113.31227 0.0000000
## MCI.Ab.pos-CN.Ab.pos 10.82898 -1.7503318 23.40830 0.1191229
## Dementia.Ab.pos-CN.Ab.pos 27.37807 11.4151082 43.34104 0.0000770
## Dementia.Ab.pos-MCI.Ab.pos 16.54909 -0.7847542 33.88294 0.0673472
Let’s do the same for tau-PET: first, let’s create a boxplot/beeswarm plot to illustrate the tau distribution across diagnostic groups.
# plot group differences
ggplot(data = ADNI.data.bl,
aes(
x = DX.Ab,
y = tau.global.SUVR)) +
geom_boxplot(alpha = 0.1, notch = T, width = 0.5) +
geom_beeswarm(alpha = 0.4) +
theme_minimal() +
geom_hline(yintercept = 1.3, linetype = 2)
As you can see, tau levels also increase across the AD spectrum. However, people usually start surpassing the threshold at symptomatic stages (MCI and Dementia). This is the case because amyloid precedes symptom onset by decades, while tau pathology is much closer to symptom onset. In fact, tau pathology is the key driver of neurodegeneration and cognitive decline, so you typically see abnormal tau first when people start to show symptoms.
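The same quick tabulation as for amyloid (a sketch, using the tau cut-off of 1.3 mentioned above) makes this concrete:

```r
# share of subjects above the tau SUVR cut-off of 1.3, per diagnostic group
# (sketch; assumes ADNI.data.bl as prepared above)
tapply(ADNI.data.bl$tau.global.SUVR > 1.3, ADNI.data.bl$DX.Ab, mean, na.rm = TRUE)
```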
To statistically test these group differences, we can again run an ANCOVA with a post-hoc Tukey Test
# test group differences (ANCOVA), controlling for age and sex
## run ANCOVA
tmp.aov <- aov(data = ADNI.data.bl,
tau.global.SUVR ~ DX.Ab + age + sex);
## summarize output
summary(tmp.aov)
## Df Sum Sq Mean Sq F value Pr(>F)
## DX.Ab 3 1.998 0.6660 34.412 < 2e-16 ***
## age 1 0.350 0.3503 18.101 2.78e-05 ***
## sex 1 0.084 0.0841 4.345 0.0379 *
## Residuals 309 5.981 0.0194
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## run post-hoc Tukey test to determine group differences
TukeyHSD(tmp.aov, which = "DX.Ab")
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tau.global.SUVR ~ DX.Ab + age + sex, data = ADNI.data.bl)
##
## $DX.Ab
## diff lwr upr p adj
## CN.Ab.pos-CN.Ab.neg 0.03695129 -0.01403989 0.08794248 0.2423433
## MCI.Ab.pos-CN.Ab.neg 0.15096161 0.08644085 0.21548237 0.0000000
## Dementia.Ab.pos-CN.Ab.neg 0.29530003 0.20887351 0.38172655 0.0000000
## MCI.Ab.pos-CN.Ab.pos 0.11401032 0.04040458 0.18761605 0.0004571
## Dementia.Ab.pos-CN.Ab.pos 0.25834874 0.16494414 0.35175334 0.0000000
## Dementia.Ab.pos-MCI.Ab.pos 0.14433842 0.04291236 0.24576449 0.0015846
One thing to keep in mind is that the analyses on tau-PET described above are based on “global” tau levels. However, tau does not - in contrast to amyloid - accumulate globally throughout the brain, but rather follows a consecutive spreading pattern that typically starts in the inferior temporal lobe. This “spreading pattern” of tau pathology was already described in the 1990s by Braak & Braak.
The surface rendering shows the six Braak-stages. Abnormal tau pathology is typically found first in Stage 1 (Entorhinal cortex) and then spreads to the hippocampus (Stage 2), the inferior temporal and limbic cortex (Stages 3&4) and finally to the association and primary somatosensory cortex (Stages 5&6). The plot on the left illustrates the likelihood of tau abnormality within each Braak stage across a group of AD patients, showing that early Braak stages typically show earlier tau abnormality than later Braak stages. Note that we usually discard the hippocampus when looking at tau-PET, because the tau-PET tracer shows quite significant off-target binding in the choroid plexus which is right next to the hippocampus. Thus, tau-PET signal in the hippocampus is quite confounded, at least for first-generation tau-PET tracers like flortaucipir.
Next, let’s look at the distribution of tau pathology in the separate Braak stage ROIs. Tau-PET within these Braak-stage ROIs is labeled as tau.braak1.SUVR, tau.braak3.SUVR, tau.braak4.SUVR, etc.
# plot group differences
# note that you can store plots in the workspace (e.g. p1 below) and later combine them using the patchwork package
p1 <- ggplot(data = ADNI.data.bl,
aes(
x = DX.Ab,
y = tau.braak1.SUVR)) +
geom_boxplot(alpha = 0.1, notch = T, width = 0.5) +
geom_beeswarm(alpha = 0.4) +
theme_minimal() +
geom_hline(yintercept = 1.3, linetype = 2) +
ggtitle("Braak 1")
p2 <- ggplot(data = ADNI.data.bl,
aes(
x = DX.Ab,
y = tau.braak3.SUVR)) +
geom_boxplot(alpha = 0.1, notch = T, width = 0.5) +
geom_beeswarm(alpha = 0.4) +
theme_minimal() +
geom_hline(yintercept = 1.3, linetype = 2) +
ggtitle("Braak 3")
p3 <- ggplot(data = ADNI.data.bl,
aes(
x = DX.Ab,
y = tau.braak4.SUVR)) +
geom_boxplot(alpha = 0.1, notch = T, width = 0.5) +
geom_beeswarm(alpha = 0.4) +
theme_minimal() +
geom_hline(yintercept = 1.3, linetype = 2) +
ggtitle("Braak 4")
p4 <- ggplot(data = ADNI.data.bl,
aes(
x = DX.Ab,
y = tau.braak5.SUVR)) +
geom_boxplot(alpha = 0.1, notch = T, width = 0.5) +
geom_beeswarm(alpha = 0.4) +
theme_minimal() +
geom_hline(yintercept = 1.3, linetype = 2) +
ggtitle("Braak 5")
p5 <- ggplot(data = ADNI.data.bl,
aes(
x = DX.Ab,
y = tau.braak6.SUVR)) +
geom_boxplot(alpha = 0.1, notch = T, width = 0.5) +
geom_beeswarm(alpha = 0.4) +
theme_minimal() +
geom_hline(yintercept = 1.3, linetype = 2) +
ggtitle("Braak 6")
# you can combine different plots using the patchwork package
# a "/" will add a new plot below, a "+" will add a plot next to the first one
p1 / p2 / p3 / p4 / p5
You can see that earlier Braak stage ROIs show abnormal signal earlier in the disease course than later Braak stage ROIs.
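One way to verify this impression numerically (a sketch, assuming the Braak SUVR variables used in the plots above) is to tabulate the share of tau-abnormal subjects per Braak ROI and diagnostic group:

```r
# proportion of tau-abnormal subjects (SUVR > 1.3) per Braak ROI and group
# (sketch; variable names as used in the plots above)
braak.vars <- c("tau.braak1.SUVR", "tau.braak3.SUVR", "tau.braak4.SUVR",
                "tau.braak5.SUVR", "tau.braak6.SUVR")
sapply(braak.vars, function(v) {
  tapply(ADNI.data.bl[[v]] > 1.3, ADNI.data.bl$DX.Ab, mean, na.rm = TRUE)
})
```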
Our current understanding of Alzheimer’s disease suggests that amyloid-beta deposition is the first pathological event, which triggers downstream tau accumulation, neurodegeneration and cognitive decline. Thus, tau and amyloid should be correlated. Let’s have a look:
ggplot(ADNI.data.bl,
aes(
x = centiloid,
y = tau.global.SUVR
)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
theme_minimal()
## `geom_smooth()` using formula 'y ~ x'
This shows that higher amyloid is clearly associated with higher tau pathology. We can quantify the association using correlation or linear regression controlling for age and sex
# correlation
rcorr(ADNI.data.bl$centiloid,
ADNI.data.bl$tau.global.SUVR)
## x y
## x 1.00 0.43
## y 0.43 1.00
##
## n= 315
##
##
## P
## x y
## x 0
## y 0
# linear regression controlling for age and sex
tmp.lm <- lm(data = ADNI.data.bl,
tau.global.SUVR ~ centiloid + age + sex); summary(tmp.lm)
##
## Call:
## lm(formula = tau.global.SUVR ~ centiloid + age + sex, data = ADNI.data.bl)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.28412 -0.06856 -0.01542 0.04292 1.14154
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1.3685946 0.0874300 15.654 < 2e-16 ***
## centiloid 0.0017479 0.0001875 9.320 < 2e-16 ***
## age -0.0040437 0.0012088 -3.345 0.000923 ***
## sexM -0.0198439 0.0168082 -1.181 0.238662
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.145 on 311 degrees of freedom
## Multiple R-squared: 0.2233, Adjusted R-squared: 0.2158
## F-statistic: 29.8 on 3 and 311 DF, p-value: < 2.2e-16
Now that you know how to run group comparisons (i.e. ANCOVAs) and correlations/regression (linear models), try to check whether amyloid and tau are associated with cognition (e.g. MMSE and ADNI_MEM). How do amyloid and tau relate to cognitive performance? Try to create figures and statistics showing differences in cognitive performance between diagnostic groups, and associations between amyloid, tau and cognition.
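As one possible starting point for this exercise (a sketch, not the only valid approach), you could mirror the models from above with a cognitive outcome:

```r
# example: is tau associated with memory, controlling for age and sex?
summary(lm(ADNI_MEM ~ tau.global.SUVR + age + sex, data = ADNI.data.bl))

# example: do diagnostic groups differ in memory? (ANCOVA + post-hoc Tukey)
tmp.aov.mem <- aov(ADNI_MEM ~ DX.Ab + age + sex, data = ADNI.data.bl)
summary(tmp.aov.mem)
TukeyHSD(tmp.aov.mem, which = "DX.Ab")
```

The same pattern works for MMSE and for centiloid as a predictor.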
Next, let’s have a look at some fMRI parameters. We have prepared functional connectivity data using a common 200-ROI cortical atlas (see https://pubmed.ncbi.nlm.nih.gov/28981612/). A surface rendering of the atlas can be found below, including the seven networks (DMN = Default-Mode, DAN = Dorsal Attention, VAN = Ventral Attention, FPCN = Fronto-parietal control, Visual, Motor, Limbic).
Atlas-based connectivity is usually stored in matrix format, also referred to as an adjacency matrix. For 200 ROIs this means that the resulting adjacency matrix is 200x200 elements large, where each element corresponds to the connectivity metric between two ROIs (e.g. the Fisher-z transformed Pearson correlation). All connectivity matrices are stored in the “Schaefer200_functional_connectivity” folder on the hard drive in .csv format. This folder path is already stored in your R workspace as “dir.fc.200”. The paths to the matrices are stored in the “ADNI.data.bl” data frame in the variable “FC.mat.200”. Let’s load an example to see what this looks like.
# import and format connectivity matrix using the read.csv command
current.adj.matrix = read.csv(ADNI.data.bl$FC.mat.200[1])
When checking the dimensions of this matrix, we realize that there are 201 columns and 200 rows. The first column contains the row names, so we need to remove it in order to get a square 200x200 matrix.
# check dimensions
dim(current.adj.matrix)
## [1] 200 201
# remove first column
current.adj.matrix.cleaned <- current.adj.matrix[,-1]
# clean diagonal (i.e. replace infinite values with zeros)
diag(current.adj.matrix.cleaned) = 0
# transform to matrix format
current.adj.matrix.cleaned.matrix <- as.matrix(current.adj.matrix.cleaned)
# check dimensions
dim(current.adj.matrix.cleaned)
## [1] 200 200
Now the connectivity matrix has 200 rows and 200 columns. Let’s have a look at the overall connectivity pattern. You can plot connectivity matrices using the “corrplot” package.
# plot correlation matrix
corrplot::corrplot(current.adj.matrix.cleaned.matrix,
diag = FALSE,
tl.pos = "n",
tl.cex = 0.5,
method = "color",
is.corr = FALSE)
You can see that the connectivity matrix is symmetrical, with positive correlations and negative correlations. Positive correlations (blue colors) mean that two brain regions show correlated BOLD signal, suggesting that they’re functionally connected. Negative correlations indicate anti-correlations, where one region is up-regulated and another one is down-regulated at the same time.
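A quick check (sketch) of how common negative connections actually are in this example matrix:

```r
# fraction of unique (off-diagonal) connections that are negative
off.diag <- current.adj.matrix.cleaned.matrix[lower.tri(current.adj.matrix.cleaned.matrix)]
mean(off.diag < 0)
```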
In order to perform graph theoretical analyses, we usually perform some additional preprocessing steps, e.g. we remove negative connections or perform thresholding of the matrix.
I’ve written a function that can perform some basic thresholding operations for you. The function below can be called from the command line and needs several input arguments: adj_matrix = a symmetrical adjacency matrix; retain_negatives = a boolean (TRUE/FALSE) indicating whether negative values should be retained; density = the fraction of connections that should be retained (1 means 100% of connections are kept, 0.3 means only the 30% strongest connections are kept); replace_with_NAs = a boolean (TRUE/FALSE) indicating whether values eliminated from the matrix should be replaced with NA (not assigned) rather than zero.
# define matrix thresholding function
adj_mat_density_thresh <- function(adj_matrix, retain_negatives, density, replace_with_NAs) {
  # exclude negative values
  if (retain_negatives == FALSE) { adj_matrix[adj_matrix < 0] <- 0 }
  # determine threshold
  threshold <- quantile(abs(adj_matrix), 1 - density)
  # apply threshold
  if (replace_with_NAs == FALSE) { adj_matrix[abs(adj_matrix) < threshold] <- 0 }
  if (replace_with_NAs == TRUE) { adj_matrix[abs(adj_matrix) < threshold] <- NA }
  return(adj_matrix)
}
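A quick sanity check of the function on a toy matrix (hypothetical values) can help build intuition before applying it to the real data:

```r
# toy symmetric 3x3 matrix with one negative connection
toy.mat <- matrix(c( 0.0, 0.8, -0.2,
                     0.8, 0.0,  0.4,
                    -0.2, 0.4,  0.0), nrow = 3, byrow = TRUE)
# retain_negatives = FALSE zeroes the negative connection;
# density = 1 keeps all remaining (positive) connections
adj_mat_density_thresh(toy.mat, retain_negatives = FALSE,
                       density = 1, replace_with_NAs = FALSE)
# the -0.2 entries become 0, everything else is unchanged
```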
Next, let’s threshold the matrix that we’ve imported previously and eliminate the negative values (i.e. retain_negatives = F), while keeping all positive values (i.e. density = 1). Everything else should be replaced with a zero.
# call the function and store the output in a new variable
current.adj.matrix.cleaned.matrix.thr1 <- adj_mat_density_thresh(adj_matrix = current.adj.matrix.cleaned.matrix,
retain_negatives = F,
density = 1,
replace_with_NAs = F)
Let’s see what the new matrix looks like:
# plot correlation matrix
corrplot::corrplot(current.adj.matrix.cleaned.matrix.thr1,
diag = FALSE,
tl.pos = "n",
tl.cex = 0.5,
method = "color",
is.corr = FALSE)
You can see that all the negative values have been eliminated, while the positive values have been kept.
Next, let’s see what thresholding does to the matrix. Now we want to retain only the 10% strongest positive connections (i.e. a density of 0.1).
# call the function and store the output in a new variable
current.adj.matrix.cleaned.matrix.thr0p1 <- adj_mat_density_thresh(adj_matrix = current.adj.matrix.cleaned.matrix,
retain_negatives = F,
density = 0.1,
replace_with_NAs = F)
Let’s see what the thresholded matrix looks like:
# plot correlation matrix
corrplot::corrplot(current.adj.matrix.cleaned.matrix.thr0p1,
diag = FALSE,
tl.pos = "n",
tl.cex = 0.5,
method = "color",
is.corr = FALSE)
You can see that the matrix looks quite sparse now, and only the “backbone” of very strong connections is retained. Sometimes thresholding is done to eliminate “noisy” and “weak” connections. However, these weak connections have been shown to encode important inter-individual information, as demonstrated by work on “connectome fingerprinting” (see the work by Emily Finn: https://www.nature.com/articles/nn.4135).
While the matrix above is relatively sparse, it still includes information about the “strength” of a connection; we refer to this as a weighted matrix. Some researchers, however, use binary matrices, which just encode the presence or absence of a connection. This is often done for structural connectivity matrices. Binarization can be done quite easily using the NetworkToolbox. Let’s binarize the matrix in which we’ve retained only 10% of the strongest connections.
current.adj.matrix.cleaned.matrix.thr0p1.bin <- NetworkToolbox::binarize(current.adj.matrix.cleaned.matrix.thr0p1)
# plot correlation matrix
corrplot::corrplot(current.adj.matrix.cleaned.matrix.thr0p1.bin,
diag = FALSE,
tl.pos = "n",
tl.cex = 0.5,
method = "color",
is.corr = FALSE)
Now, all information about the strength of connections is lost.
Next, let’s use the connectivity matrix to compute some basic graph-theoretical parameters.
For this, the “NetworkToolbox” package is quite helpful, as it comes with some easy-to-use commands (see https://cran.r-project.org/web/packages/NetworkToolbox/NetworkToolbox.pdf)
The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. This means that if a node is connected to two neighbours, and these neighbours are also interconnected, they form a cluster (triangle), as illustrated in the example below. Clustering is a measure of segregation, indicating how strongly local communities are interconnected (e.g. whether your friends are also friends among each other).
So let’s compute the clustering coefficient on our matrix. We can compute a “global” clustering coefficient for the entire network, as well as a clustering coefficient for each single node in the network. To compute the clustering coefficient, we use the clustcoeff command from the NetworkToolbox. This command requires two inputs: an adjacency matrix, and a TRUE/FALSE statement on whether the matrix is weighted or binary. Clustering is usually computed on networks with “positive” connections only, so we need to use the matrix from which we eliminated the negative connections, called current.adj.matrix.cleaned.matrix.thr1. You can find more information about this function by typing help("clustcoeff") in the console. We can compute the clustering coefficient for networks thresholded at a density of 1 and 0.1 to see how the clustering coefficient changes.
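To build intuition first, here is a toy example (a sketch with made-up values) on a small binary network, where nodes 1, 2 and 3 form a closed triangle and node 4 is only attached to node 1:

```r
# 4-node binary network: closed triangle 1-2-3 plus a dangling node 4
toy <- matrix(0, 4, 4)
toy[1, 2] <- toy[2, 1] <- 1
toy[2, 3] <- toy[3, 2] <- 1
toy[1, 3] <- toy[3, 1] <- 1
toy[1, 4] <- toy[4, 1] <- 1
# nodes 2 and 3 sit in a fully closed triangle (high clustering);
# node 1 has three neighbours of which only one pair (2-3) is connected
clustcoeff(toy, weighted = FALSE)$CCi
```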
# compute clustering
tmp.clustering.weighted.thr1 <- clustcoeff(current.adj.matrix.cleaned.matrix.thr1, weighted = T)
tmp.clustering.weighted.thr0p1 <- clustcoeff(current.adj.matrix.cleaned.matrix.thr0p1, weighted = T)
The output contains two different measures, the global clustering coefficient CC, and the local clustering coefficient for each ROI called CCi. You can choose which one to look at by navigating with the $ sign.
# global clustering
tmp.clustering.weighted.thr1$CC
## [1] 0.18073
tmp.clustering.weighted.thr0p1$CC
## [1] 0.38338
# ROI-wise clustering
tmp.clustering.weighted.thr1$CCi
## X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13
## 0.183 0.151 0.187 0.202 0.108 0.188 0.161 0.201 0.224 0.229 0.199 0.194 0.195
## X14 X15 X16 X17 X18 X19 X20 X21 X22 X23 X24 X25 X26
## 0.203 0.127 0.180 0.150 0.170 0.162 0.171 0.179 0.192 0.203 0.240 0.266 0.271
## X27 X28 X29 X30 X31 X32 X33 X34 X35 X36 X37 X38 X39
## 0.286 0.278 0.296 0.269 0.189 0.164 0.171 0.178 0.172 0.170 0.191 0.242 0.169
## X40 X41 X42 X43 X44 X45 X46 X47 X48 X49 X50 X51 X52
## 0.242 0.212 0.282 0.157 0.163 0.138 0.129 0.175 0.162 0.173 0.170 0.151 0.180
## X53 X54 X55 X56 X57 X58 X59 X60 X61 X62 X63 X64 X65
## 0.189 0.291 0.160 0.168 0.162 0.212 0.206 0.156 0.143 0.176 0.162 0.129 0.138
## X66 X67 X68 X69 X70 X71 X72 X73 X74 X75 X76 X77 X78
## 0.121 0.101 0.121 0.125 0.169 0.136 0.131 0.181 0.180 0.151 0.181 0.167 0.174
## X79 X80 X81 X82 X83 X84 X85 X86 X87 X88 X89 X90 X91
## 0.191 0.167 0.187 0.150 0.155 0.071 0.164 0.130 0.142 0.143 0.096 0.140 0.207
## X92 X93 X94 X95 X96 X97 X98 X99 X100 X101 X102 X103 X104
## 0.200 0.193 0.207 0.213 0.221 0.168 0.158 0.176 0.091 0.160 0.182 0.206 0.195
## X105 X106 X107 X108 X109 X110 X111 X112 X113 X114 X115 X116 X117
## 0.204 0.133 0.143 0.133 0.199 0.188 0.146 0.182 0.187 0.192 0.172 0.164 0.153
## X118 X119 X120 X121 X122 X123 X124 X125 X126 X127 X128 X129 X130
## 0.155 0.118 0.151 0.147 0.138 0.172 0.278 0.289 0.240 0.275 0.297 0.275 0.267
## X131 X132 X133 X134 X135 X136 X137 X138 X139 X140 X141 X142 X143
## 0.246 0.296 0.251 0.282 0.211 0.158 0.182 0.253 0.227 0.182 0.208 0.242 0.236
## X144 X145 X146 X147 X148 X149 X150 X151 X152 X153 X154 X155 X156
## 0.281 0.235 0.241 0.123 0.163 0.136 0.130 0.197 0.145 0.131 0.146 0.152 0.212
## X157 X158 X159 X160 X161 X162 X163 X164 X165 X166 X167 X168 X169
## 0.182 0.238 0.149 0.125 0.169 0.156 0.199 0.199 0.141 0.237 0.172 0.175 0.139
## X170 X171 X172 X173 X174 X175 X176 X177 X178 X179 X180 X181 X182
## 0.139 0.169 0.134 0.136 0.184 0.206 0.228 0.172 0.139 0.160 0.132 0.230 0.131
## X183 X184 X185 X186 X187 X188 X189 X190 X191 X192 X193 X194 X195
## 0.130 0.168 0.170 0.155 0.149 0.157 0.177 0.172 0.095 0.139 0.146 0.121 0.212
## X196 X197 X198 X199 X200
## 0.173 0.196 0.179 0.155 0.159
hist(tmp.clustering.weighted.thr1$CCi)
tmp.clustering.weighted.thr0p1$CCi
## X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13
## 0.461 0.281 0.434 0.392 0.452 0.435 0.312 0.359 0.352 0.368 0.422 0.386 0.426
## X14 X15 X16 X17 X18 X19 X20 X21 X22 X23 X24 X25 X26
## 0.265 0.471 0.284 0.342 0.269 0.340 0.209 0.441 0.287 0.477 0.526 0.571 0.672
## X27 X28 X29 X30 X31 X32 X33 X34 X35 X36 X37 X38 X39
## 0.620 0.520 0.577 0.516 0.472 0.563 0.253 0.201 0.389 0.333 0.439 0.433 0.383
## X40 X41 X42 X43 X44 X45 X46 X47 X48 X49 X50 X51 X52
## 0.415 0.468 0.578 0.341 0.302 0.238 0.437 0.378 0.316 0.305 0.343 0.344 0.307
## X53 X54 X55 X56 X57 X58 X59 X60 X61 X62 X63 X64 X65
## 0.288 0.562 0.483 0.549 0.462 0.434 0.397 0.464 0.537 0.458 0.366 0.000 0.353
## X66 X67 X68 X69 X70 X71 X72 X73 X74 X75 X76 X77 X78
## 0.169 0.164 0.331 0.256 0.340 0.378 0.373 0.251 0.436 0.573 0.477 0.505 0.323
## X79 X80 X81 X82 X83 X84 X85 X86 X87 X88 X89 X90 X91
## 0.225 0.416 0.258 0.469 0.346 0.000 0.239 0.339 0.377 0.209 0.000 0.364 0.494
## X92 X93 X94 X95 X96 X97 X98 X99 X100 X101 X102 X103 X104
## 0.500 0.459 0.486 0.574 0.332 0.359 0.280 0.368 0.000 0.484 0.371 0.409 0.407
## X105 X106 X107 X108 X109 X110 X111 X112 X113 X114 X115 X116 X117
## 0.428 0.502 0.360 0.509 0.389 0.310 0.220 0.432 0.363 0.289 0.294 0.242 0.214
## X118 X119 X120 X121 X122 X123 X124 X125 X126 X127 X128 X129 X130
## 0.324 0.523 0.371 0.155 0.290 0.312 0.635 0.530 0.579 0.611 0.579 0.589 0.583
## X131 X132 X133 X134 X135 X136 X137 X138 X139 X140 X141 X142 X143
## 0.557 0.594 0.609 0.587 0.427 0.272 0.554 0.408 0.580 0.410 0.356 0.469 0.476
## X144 X145 X146 X147 X148 X149 X150 X151 X152 X153 X154 X155 X156
## 0.588 0.522 0.585 0.146 0.151 0.199 0.127 0.429 0.344 0.375 0.265 0.321 0.208
## X157 X158 X159 X160 X161 X162 X163 X164 X165 X166 X167 X168 X169
## 0.278 0.614 0.762 0.196 0.289 0.427 0.382 0.499 0.199 0.287 0.428 0.458 0.267
## X170 X171 X172 X173 X174 X175 X176 X177 X178 X179 X180 X181 X182
## 0.312 0.236 0.154 0.197 0.271 0.276 0.513 0.473 0.267 0.368 0.309 0.571 0.487
## X183 X184 X185 X186 X187 X188 X189 X190 X191 X192 X193 X194 X195
## 0.239 0.338 0.510 0.567 0.271 0.446 0.266 0.356 0.173 0.216 0.292 0.290 0.576
## X196 X197 X198 X199 X200
## 0.494 0.481 0.288 0.422 0.340
hist(tmp.clustering.weighted.thr0p1$CCi)
# correlation of clustering coefficients across different thresholds
plot(tmp.clustering.weighted.thr1$CCi, tmp.clustering.weighted.thr0p1$CCi)
rcorr(tmp.clustering.weighted.thr1$CCi, tmp.clustering.weighted.thr0p1$CCi)
## x y
## x 1.00 0.63
## y 0.63 1.00
##
## n= 200
##
##
## P
## x y
## x 0
## y 0
You can see that clustering is much higher in the network that was thresholded more restrictively. Any idea/hypothesis why this could be the case?
The degree of a node quantifies the number or strength of connections that a given node has to the rest of the network (see figure below)
The (weighted) degree can also be computed quite simply as the row or column means of the adjacency matrix, which quantifies the strength of connections that a given node has to the remaining 199 nodes.
# compute degree
tmp.degree.weighted.thr1 <- colMeans(current.adj.matrix.cleaned.matrix.thr1, na.rm = T)
hist(tmp.degree.weighted.thr1)
tmp.degree.weighted.thr0p1 <- colMeans(current.adj.matrix.cleaned.matrix.thr0p1, na.rm = T)
hist(tmp.degree.weighted.thr0p1)
# correlation of degree values across different thresholds
plot(tmp.degree.weighted.thr1,tmp.degree.weighted.thr0p1)
rcorr(tmp.degree.weighted.thr1,tmp.degree.weighted.thr0p1)
## x y
## x 1.00 0.83
## y 0.83 1.00
##
## n= 200
##
##
## P
## x y
## x 0
## y 0
We can also compute the degree of a node with respect to the ROIs of its own network (i.e. nodes surrounded by a gray circle), as a measure of mean network connectivity. An example would be how strongly the posterior cingulate cortex (i.e. a typical hub of the Default-Mode Network) is connected to the remaining DMN nodes.
To compute the connectivity of a given node to members of its own network, we first need to know which network each node belongs to. To this end, we can use the community vector.
# load community vector
Community <- read.table(paste0(dir.root, "Atlas/Schaefer2018_200Parcels_7Networks_order_numeric.txt"))
Community$names <- mapvalues(Community$V1,
from=c(1,2,3,4,5,6,7),
to=c("Visual", "Motor", "DAN", "VAN", "Limbic", "FPCN", "DMN"))
# check the number of ROIs assigned to each network
table(Community$names)
##
## DAN DMN FPCN Limbic Motor VAN Visual
## 26 46 30 12 35 22 29
Next, let’s compute the community strength using the comm.str command. It needs the following input arguments: A = the connectivity matrix; comm = the community vector (i.e. which network each node belongs to); weighted = a boolean (TRUE/FALSE) stating whether the network is weighted or not; and measure = whether to compute connectivity “within” or “between” networks.
tmp.comm.str.within = comm.str(A = current.adj.matrix.cleaned.matrix.thr1,
comm = Community$names,
weighted = T,
measure = c("within"))
print(tmp.comm.str.within)
## DAN DMN FPCN Limbic Motor VAN Visual
## 1033.352 1383.617 980.124 378.626 1290.076 739.307 1018.256
The output reflects the strength of connections among each network’s own nodes.
Next, we want to determine a measure of network integration called the shortest path length. This measure quantifies how many steps (connections) a given node is, on average, away from any other node in the network. The idea was popularized by work in social psychology suggesting that any two people in the world are connected via only a handful of intermediaries (“six degrees of separation”). This phenomenon was thus termed “small-world”. Accordingly, a network that is segregated into modules but shows shortcuts between the modules is called a “small-world” network.
So what we will do now is determine the path length of each node, i.e. how “far”, in terms of connectedness, each node is from all the remaining nodes in the network. Again, the computation is quite simple using the pathlengths command of the NetworkToolbox, but I recommend some reading if you’re interested in understanding the maths behind it (e.g. the landmark paper by Rubinov & Sporns https://pubmed.ncbi.nlm.nih.gov/19819337/). For more information on the pathlengths command, run help("pathlengths") from the console. The function requires two inputs: A = the adjacency matrix, and weighted = a TRUE/FALSE statement indicating whether the matrix is weighted or not.
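Again, a toy example (sketch) helps build intuition: in a 4-node chain 1-2-3-4, node 1 reaches nodes 2, 3 and 4 in 1, 2 and 3 steps, respectively:

```r
# binary chain network 1-2-3-4
chain <- matrix(0, 4, 4)
chain[1, 2] <- chain[2, 1] <- 1
chain[2, 3] <- chain[3, 2] <- 1
chain[3, 4] <- chain[4, 3] <- 1
# average shortest path length per node; the end nodes (1 and 4) are
# "further away" from the rest of the network than the central nodes
pathlengths(chain, weighted = FALSE)$ASPLi
```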
# compute path lengths
tmp.pathlengths <- pathlengths(current.adj.matrix.cleaned.matrix.thr1,
weighted = TRUE)
The output contains several measures, including a global measure called ASPL (average shortest path length), which is the mean path length of the entire network. A network with a short average path length tends to be relatively integrated, with fast communication among its nodes. In addition, we get a path length measure for each node, called ASPLi.
# average shortest path length of the entire network
tmp.pathlengths$ASPL
## [1] 3.739728
# average shortest path length of each node
tmp.pathlengths$ASPLi
## X1 X2 X3 X4 X5 X6 X7 X8
## 3.686793 3.938371 3.421287 3.587114 4.189089 3.621968 3.996264 3.827001
## X9 X10 X11 X12 X13 X14 X15 X16
## 3.491819 3.404647 3.640254 3.568840 3.662157 3.790440 4.192435 3.600323
## X17 X18 X19 X20 X21 X22 X23 X24
## 4.152767 3.889138 3.759094 3.546720 3.610486 3.208760 3.495877 3.557310
## X25 X26 X27 X28 X29 X30 X31 X32
## 3.504104 3.632118 3.553817 3.618841 3.668890 3.611785 3.236675 3.887514
## X33 X34 X35 X36 X37 X38 X39 X40
## 3.599817 3.346027 3.564644 3.628343 3.567697 3.422160 3.457815 3.316711
## X41 X42 X43 X44 X45 X46 X47 X48
## 3.627064 3.662050 3.651143 3.770416 3.774117 3.841425 3.963985 3.970873
## X49 X50 X51 X52 X53 X54 X55 X56
## 4.171795 4.091379 3.714035 3.124018 3.371867 3.666373 3.579043 4.052180
## X57 X58 X59 X60 X61 X62 X63 X64
## 3.565556 3.179551 3.171100 3.770616 4.003131 3.717592 3.594500 4.451912
## X65 X66 X67 X68 X69 X70 X71 X72
## 4.291514 4.266782 4.054475 4.201641 4.018221 3.575191 3.871161 3.767981
## X73 X74 X75 X76 X77 X78 X79 X80
## 3.217069 3.328937 3.540883 3.595778 3.943491 3.972977 4.216667 3.783807
## X81 X82 X83 X84 X85 X86 X87 X88
## 3.898053 4.034913 4.225129 5.007757 3.983598 4.036498 4.167132 3.813469
## X89 X90 X91 X92 X93 X94 X95 X96
## 3.986264 3.593188 3.981699 4.057075 3.630816 3.687564 3.755381 3.625598
## X97 X98 X99 X100 X101 X102 X103 X104
## 3.740095 3.535256 3.608260 5.192908 3.473409 3.583136 3.435899 3.800310
## X105 X106 X107 X108 X109 X110 X111 X112
## 3.284871 4.291146 4.025702 4.193787 3.610934 3.494363 3.810400 3.791290
## X113 X114 X115 X116 X117 X118 X119 X120
## 3.558172 3.585095 3.504551 3.737817 3.930120 4.107287 4.203372 3.923864
## X121 X122 X123 X124 X125 X126 X127 X128
## 3.791177 3.972195 3.401683 3.654967 3.608598 3.510104 3.780099 3.581893
## X129 X130 X131 X132 X133 X134 X135 X136
## 3.743675 3.558280 3.607661 3.662662 3.798593 3.607188 3.231204 3.856260
## X137 X138 X139 X140 X141 X142 X143 X144
## 3.824222 3.451501 3.676566 3.616859 3.397651 3.499816 3.514700 3.584283
## X145 X146 X147 X148 X149 X150 X151 X152
## 3.549439 3.610638 3.878394 3.695994 3.765435 3.635823 3.626423 4.199910
## X153 X154 X155 X156 X157 X158 X159 X160
## 4.217784 3.748852 4.006887 3.097860 3.356211 3.845062 4.784067 4.141500
## X161 X162 X163 X164 X165 X166 X167 X168
## 3.531139 3.818513 3.198978 3.159380 3.444612 3.361998 3.528348 3.629794
## X169 X170 X171 X172 X173 X174 X175 X176
## 4.212174 3.959135 3.280496 3.520638 3.774908 3.394539 3.322744 3.578837
## X177 X178 X179 X180 X181 X182 X183 X184
## 3.775900 3.512600 3.331703 3.598931 3.716219 3.820170 3.597972 3.939428
## X185 X186 X187 X188 X189 X190 X191 X192
## 3.677294 3.747420 4.258431 4.211586 4.083519 4.117704 4.774315 3.913690
## X193 X194 X195 X196 X197 X198 X199 X200
## 3.538297 3.906524 3.939820 3.614543 3.499640 3.628900 3.765883 3.672455
hist(tmp.pathlengths$ASPLi)
Speaking of small-world networks, we can also directly quantify how much a given network architecture resembles a small-world network. Here's a short definition: a small-world network is a type of mathematical graph in which most nodes are not neighbors of one another, but the neighbors of any given node are likely to be neighbors of each other, and most nodes can be reached from every other node by a small number of hops or steps. In other words, small-world networks tend to be an intermediate phenotype between a regular network and a random network (see figure below). Biological networks tend toward small-world topology (see for instance https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3604768/), which seems to be a particularly cost-efficient network architecture.
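The regular-to-random continuum can be illustrated with the classic Watts-Strogatz model. The sketch below assumes the igraph package is available (it is a dependency of qgraph): starting from a ring lattice, a small rewiring probability p shortens path lengths drastically while clustering stays high, which is the small-world regime.

```r
library(igraph)  # assumed to be installed; qgraph depends on it
set.seed(42)
# Watts-Strogatz ring lattices with increasing rewiring probability p:
# p = 0 gives a regular lattice, small p a small-world network, p = 1 a random one
res <- data.frame(p = c(0, 0.05, 1))
for (r in 1:nrow(res)) {
  g <- sample_smallworld(dim = 1, size = 200, nei = 4, p = res$p[r])
  res$clustering[r]  <- transitivity(g, type = "global")
  res$path.length[r] <- mean_distance(g)
}
print(res)
```

At p = 0.05 you should see a much shorter mean path length than at p = 0, with only a modest drop in clustering.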
To compute the "small-worldness" of a given network, we can use the smallworldness command of the NetworkToolbox. This requires several inputs: A = an adjacency matrix, and iter = the number of iteratively generated random networks with equivalent weight and degree distribution against which "your" network is compared. Note that computing this parameter takes some time, and the computation time grows with the number of iterations you select. For help, call help("smallworldness")
# set a fixed seed to get reproducible results when generating random data
set.seed(42)
# compute small-worldness coefficient
tmp.smallworldness <- smallworldness(current.adj.matrix.cleaned.matrix.thr1,
iter = 10,
progBar = FALSE,
method = "HG")
# show small-world coefficient
print(tmp.smallworldness$swm)
## [1] 1.138257
It's also quite useful to compare different graph theoretical metrics, because many of them are highly inter-related. When we compare the average shortest path length (x-axis), a measure of integration, with a measure of segregation (e.g. clustering, y-axis), we can see that nodes with shorter path lengths also tend to show higher clustering. This is because a node with many connections to its neighbours can also reach the rest of the network via fewer connections.
plot(tmp.pathlengths$ASPLi, tmp.clustering.weighted.thr1$CCi)
rcorr(tmp.pathlengths$ASPLi, tmp.clustering.weighted.thr1$CCi)
## x y
## x 1.0 -0.5
## y -0.5 1.0
##
## n= 200
##
##
## P
## x y
## x 0
## y 0
You can also have a look at how these metrics are distributed across the different networks
# path length
boxplot(tmp.pathlengths$ASPLi~Community$names)
# clustering
boxplot(tmp.clustering.weighted.thr1$CCi~Community$names)
# degree
boxplot(tmp.degree.weighted.thr1~Community$names)
Now that you've learned about some basic graph theoretical parameters (there are many more!), it's time to learn how to compute these measures for larger groups of subjects. Running the code above "by hand" for every individual in the current dataset would probably take ages (…and it would be super boring…). So let's practice what I call "applied laziness" and run everything automatically using looped scripts and functions. The basic idea is that we define a set of fixed analysis steps and repeat them iteratively across many individuals. The simplest way to do this is to use for loops (see for instance https://www.datacamp.com/community/tutorials/tutorial-on-loops-in-r). And while the computer is getting the work done for you, you can go and do whatever you want…while still getting paid for your job… ;)
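Before looping over real subjects, here is a minimal toy version of the exact pattern we will use below (iterate over the rows of a data frame, compute something per row, and concatenate the per-row results):

```r
# toy input: one row per "subject"
toy.input <- data.frame(ID = c("s1", "s2", "s3"), x = c(1, 2, 3))
for (i in 1:nrow(toy.input)) {
  # pick the i-th row, derive a new measure, and append it to the results
  current <- toy.input[i, ]
  current$x.squared <- current$x^2
  if (i == 1) toy.concat <- current
  if (i > 1)  toy.concat <- rbind(toy.concat, current)
}
print(toy.concat)   # three rows, with x.squared = 1, 4, 9
```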
So we need to define a basic analysis scheme that we would like to apply to our data. For a given subject, we first read in the connectivity matrix, apply some thresholding and preprocessing, and then compute the graph metrics. Basically, we'll repeat all the steps above in a single script. To keep it rather simple, we compute the clustering coefficient, degree, and participation coefficient for each node and then summarize these measures for the whole brain and for the seven different networks (the path-length and small-worldness computations are included below but commented out, as they take a while to run). Note that we will overwrite some of the variable names used in the code above.
# define your input data
input.data = ADNI.data.bl
# define the number of iterations that need to be performed.
# In our data frame, each row corresponds to one subject, so we want to iterate across rows.
# Thus, the number of iterations corresponds to the number of rows of our input data
n.iter = nrow(input.data)
print(paste0("number of iterations = ", n.iter))
## [1] "number of iterations = 315"
# Next, open the loop, and define that it runs all
# the way from 1 to the number of iterations that you've defined as n.iter
# within the loop, the index i will change in each iteration from 1, to 2, to 3, ..., up to n.iter
n = 0
for (i in 1:n.iter){
svMisc::progress(i, max.value = n.iter)
# set an internal counter within the loop (can be helpful sometimes!)
n = n +1
# select the i-th observation in the data frame
current.data = input.data[i,]
# import the connectivity matrix
current.adj.matrix = read.csv(current.data$FC.mat.200[1])
# remove first column
current.adj.matrix.cleaned <- current.adj.matrix[,-1]
# clean diagonal (i.e. replace infinite values on the diagonal with zeros)
diag(current.adj.matrix.cleaned) = 0
# transform to matrix format
current.adj.matrix.cleaned.matrix <- as.matrix(current.adj.matrix.cleaned)
# remove negative connections and perform density thresholding at a density of 1
current.adj.matrix.cleaned.matrix.thr1 <- adj_mat_density_thresh(adj_matrix = current.adj.matrix.cleaned.matrix,
retain_negatives = F,
density = 1,
replace_with_NAs = F)
# compute clustering coefficient
current.clustering.weighted.thr1 <- clustcoeff(current.adj.matrix.cleaned.matrix.thr1, weighted = T)
# compute degree
current.degree.weighted.thr1 <- colMeans(current.adj.matrix.cleaned.matrix.thr1, na.rm = T)
# compute community strength
current.comm.str.within = comm.str(A = current.adj.matrix.cleaned.matrix.thr1,
comm = Community$names,
weighted = T,
measure = c("within"))
# compute participation coefficient
current.participation = participation(current.adj.matrix.cleaned.matrix.thr1, comm = Community$names)
# compute path lengths
#current.pathlengths <- pathlengths(current.adj.matrix.cleaned.matrix.thr1,
# weighted = TRUE)
#
## compute small-worldness coefficient
#current.smallworldness <- smallworldness(current.adj.matrix.cleaned.matrix.thr1,
# iter = 1,
# progBar = FALSE,
# method = "HG")
# summarize clustering, degree and path length by networks
# to this end we first create a data frame with all the data and the network affiliation vector
tmp.df = data.frame(clustering = current.clustering.weighted.thr1$CCi,
degree = current.degree.weighted.thr1,
participation = current.participation$positive,
network = Community$names)
# next we create the mean of each metric for each network in long format
tmp.df.summary <-
tmp.df %>%
group_by(network) %>%
summarise(clustering = mean(clustering),
degree = mean(degree),
participation = mean(participation))
# add subject ID
tmp.df.summary$ID <- current.data$ID
# concatenate network-specific data
if(n == 1){tmp.df.summary.concat = tmp.df.summary}
if(n > 1){tmp.df.summary.concat = rbind(tmp.df.summary.concat, tmp.df.summary)}
# forward small-worldness, average shortest path length and community strength to the main data frame
#current.data$smallworldness <- current.smallworldness$swm
#current.data$ASPL <- current.pathlengths$ASPL
current.data$degree.global = mean(current.degree.weighted.thr1)
current.data$clustering.global = current.clustering.weighted.thr1$CC
current.data$participation.global = mean(current.participation$overall)
# adding in the community strength data is a bit more complicated, let me know if you want that explained in further detail
current.comm.str.within.reshaped = data.frame(t(current.comm.str.within))
names(current.comm.str.within.reshaped) <- paste0("comm.str.", names(current.comm.str.within.reshaped))
current.data <- cbind(current.data, current.comm.str.within.reshaped)
# concatenate main data frame
if (n == 1){current.data.concat = current.data}
if (n > 1){current.data.concat = rbind(current.data.concat,current.data)}
}
## Progress: 1 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 241 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 242 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 243 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 244 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 245 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 246 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 247 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 248 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 249 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 250 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 251 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 252 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 253 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 254 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 255 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 256 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 257 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 258 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 259 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 260 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 261 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 262 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 263 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 264 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 265 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 266 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 267 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 268 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 269 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 270 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 271 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 272 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 273 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 274 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 275 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 276 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 277 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 278 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 279 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 280 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 281 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 282 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 283 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 284 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 285 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 286 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 287 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 288 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 289 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 290 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 291 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 292 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 293 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 294 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 295 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 296 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 297 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 298 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 299 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 300 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 301 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 302 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 303 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 304 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 305 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 306 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 307 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 308 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 309 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 310 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 311 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 312 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 313 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 314 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
## Progress: 315 on 315
## `summarise()` ungrouping output (override with `.groups` argument)
# reshape network-specific path length, clustering and degree from long to wide
# format (one column per metric x network combination)
tmp.df.summary.concat2 <- data.frame(tmp.df.summary.concat)
tmp.df.wide <- reshape(tmp.df.summary.concat2, idvar = "ID", timevar = "network", direction = "wide")
# merge the graph metrics with the subject-level data
output.data <- merge(current.data.concat, tmp.df.wide, by = "ID")
boxplot(output.data$participation.DMN ~ output.data$DX.Ab)
# test diagnostic-group differences in DMN participation, adjusting for age and sex
tmp.aov <- aov(data = output.data,
               participation.DMN ~ DX.Ab + age + sex)
summary(tmp.aov)
TukeyHSD(tmp.aov, which = "DX.Ab")
## Df Sum Sq Mean Sq F value Pr(>F)
## DX.Ab 3 0.0317 0.010579 3.936 0.00886 **
## age 1 0.0099 0.009858 3.668 0.05640 .
## sex 1 0.0241 0.024105 8.969 0.00297 **
## Residuals 309 0.8305 0.002688
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Warning in replications(paste("~", xx), data = mf): non-factors ignored: age
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = participation.DMN ~ DX.Ab + age + sex, data = output.data)
##
## $DX.Ab
## diff lwr upr p adj
## CN.Ab.pos-CN.Ab.neg 0.001464213 -0.01753724 0.020465663 0.9972029
## MCI.Ab.pos-CN.Ab.neg -0.008160736 -0.03220387 0.015882401 0.8169144
## Dementia.Ab.pos-CN.Ab.neg -0.041177880 -0.07338402 -0.008971739 0.0058690
## MCI.Ab.pos-CN.Ab.pos -0.009624949 -0.03705353 0.017803628 0.8014350
## Dementia.Ab.pos-CN.Ab.pos -0.042642093 -0.07744856 -0.007835630 0.0092136
## Dementia.Ab.pos-MCI.Ab.pos -0.033017144 -0.07081274 0.004778455 0.1106865
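The ANOVA-plus-Tukey workflow above can be illustrated end-to-end without access to the ADNI data. The sketch below uses simulated values; the group sizes, effect sizes, and the `participation` variable are all made up and only mirror the structure of the analysis:

```r
# simulate four diagnostic groups with age and sex covariates (all values fabricated)
set.seed(42)
sim <- data.frame(
  DX.Ab = factor(rep(c("CN.Ab.neg", "CN.Ab.pos", "MCI.Ab.pos", "Dementia.Ab.pos"), each = 40)),
  age   = rnorm(160, mean = 72, sd = 6),
  sex   = factor(sample(c("F", "M"), 160, replace = TRUE))
)
# inject a group effect: lower participation coefficient in the dementia group
sim$participation <- 0.25 +
  ifelse(sim$DX.Ab == "Dementia.Ab.pos", -0.04, 0) +
  rnorm(160, sd = 0.03)

# ANCOVA-style model, then Tukey post-hoc tests restricted to the diagnosis factor
sim.aov <- aov(participation ~ DX.Ab + age + sex, data = sim)
summary(sim.aov)
TukeyHSD(sim.aov, which = "DX.Ab")
```

As in the ADNI output above, `TukeyHSD()` warns that non-factor covariates are ignored; the pairwise comparisons on the diagnosis factor are still returned.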
# cognitive reserve: does FPCN participation modify the association between tau and memory?
# visualize via a median split (ntile with 2 bins) on participation.FPCN
tmp.data <- subset(output.data, DX.Ab != "CN.Ab.neg")
ggplot(tmp.data,
       aes(
         x = tau.braak3.SUVR,
         y = ADNI_MEM,
         colour = as.factor(ntile(participation.FPCN, 2))
       )) +
  geom_point() +
  geom_smooth(method = "lm")
## `geom_smooth()` using formula 'y ~ x'
tmp.lm <- lm(data = tmp.data,
             ADNI_MEM ~ I(tau.braak3.SUVR + tau.braak1.SUVR + tau.braak4.SUVR) + participation.FPCN)
summary(tmp.lm)
##
## Call:
## lm(formula = ADNI_MEM ~ I(tau.braak3.SUVR + tau.braak1.SUVR +
## tau.braak4.SUVR) + participation.FPCN, data = tmp.data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.73344 -0.41385 -0.06292 0.45204 1.78470
##
## Coefficients:
## Estimate Std. Error
## (Intercept) 1.64726 0.53766
## I(tau.braak3.SUVR + tau.braak1.SUVR + tau.braak4.SUVR) -0.58434 0.09373
## participation.FPCN 4.59480 1.95199
## t value Pr(>|t|)
## (Intercept) 3.064 0.0027 **
## I(tau.braak3.SUVR + tau.braak1.SUVR + tau.braak4.SUVR) -6.234 7.01e-09 ***
## participation.FPCN 2.354 0.0202 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7105 on 120 degrees of freedom
## Multiple R-squared: 0.2679, Adjusted R-squared: 0.2557
## F-statistic: 21.96 on 2 and 120 DF, p-value: 7.475e-09
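The model above enters tau burden and FPCN participation additively; a moderation ("reserve") effect in the strict sense would require the interaction term. A minimal sketch of that interaction test, on simulated data (variable names and effect sizes are placeholders, not the ADNI values):

```r
# simulate a moderation effect: higher "participation" weakens the negative tau slope
set.seed(7)
n    <- 150
tau  <- rnorm(n, mean = 1.3, sd = 0.2)   # stand-in for summed Braak-ROI SUVR
part <- rnorm(n, mean = 0.25, sd = 0.03) # stand-in for FPCN participation
mem  <- 1.5 - 3 * tau +
  20 * (part - mean(part)) * (tau - mean(tau)) +  # built-in interaction
  rnorm(n, sd = 0.3)

# tau * part expands to both main effects plus the tau:part interaction
fit <- lm(mem ~ tau * part)
summary(fit)
```

A significant positive `tau:part` coefficient is the moderation signature here: the tau slope on memory becomes less negative as participation increases.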